test chip


IBM's Research Group Set To Simplify Enterprise AI

#artificialintelligence

When you talk about AI and IBM, people typically think of Watson and, more recently, Project Debater. But IBM Research is also working on more generalized AI tools and accelerators for data center applications. IBM Research has an AI Hardware Center near Albany, NY, which has been investigating a number of projects involving more general AI performance and power improvements.


Photonics startup Lightmatter details its AI optical accelerator chip

#artificialintelligence

Ahead of the Hot Chips conference this week, photonics chip startup Lightmatter revealed the first technical details about its upcoming test chip. Unlike conventional processors and graphics cards, the test chip uses light to send signals, promising orders of magnitude higher performance and efficiency. The technology underpinning the test chip -- photonic integrated circuits -- stems from a 2017 paper coauthored by Lightmatter CEO and MIT alumnus Nicholas Harris that described a novel way to perform machine learning workloads using optical interference. Chips like the test chip, which is on track for a fall 2021 release, require only a limited amount of energy because light produces less heat than electricity. They also benefit from reduced latency and are less susceptible to changes in temperature, electromagnetic fields, and noise.
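The 2017 approach rests on the fact that a weight matrix can be factored into unitary transformations, realizable as meshes of optical interferometers, and a diagonal scaling, realizable as per-channel attenuation. The NumPy sketch below only illustrates that arithmetic under those assumptions; it is not anything Lightmatter has published about the test chip, and the matrix sizes and values are hypothetical.

```python
import numpy as np

# Hypothetical illustration: photonic accelerators of this kind typically
# realize a weight matrix W as U @ diag(s) @ Vh, where the unitaries U, Vh
# map onto interferometer meshes and diag(s) onto per-channel attenuation.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))          # electronic weight matrix to "program"
U, s, Vh = np.linalg.svd(W)          # factorization the optics would realize

x = rng.normal(size=4)               # input vector (encoded in optical amplitudes)

# Each stage corresponds to one physically realizable optical operation:
y = Vh @ x                           # first interferometer mesh (unitary)
y = s * y                            # per-channel attenuation / gain
y = U @ y                            # second interferometer mesh (unitary)

# The cascade reproduces the ordinary matrix-vector product.
assert np.allclose(y, W @ x)
print(y)
```

The point of the decomposition is that the full matrix-vector product is carried out by passive interference plus one scaling stage, which is where the claimed energy and latency advantages come from.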


Intel Pioneers New Technologies to Advance Artificial Intelligence Intel Newsroom

#artificialintelligence

Today I spoke at the WSJDLive global technology conference about cognitive and artificial intelligence (AI) technology, two nascent areas that I believe will be transformative for the industry and the world. These systems also offer tremendous market opportunity and are on a trajectory to reach $46 billion in industry revenue by 2020. As part of this, today we announced that Intel will ship the industry's first silicon for neural network processing, the Intel Nervana Neural Network Processor (NNP), before the end of this year. We are thrilled to have Facebook in close collaboration, sharing its technical insights as we bring this new generation of AI hardware to market. The Intel Nervana NNP promises to revolutionize AI computing across myriad industries.


Intel's New Self-Learning Chip Promises to Accelerate Artificial Intelligence Intel Newsroom

#artificialintelligence

Imagine a future where complex decisions could be made faster and adapt over time. Where societal and industrial problems can be autonomously solved using learned experiences. It's a future where first responders using image-recognition applications can analyze streetlight camera images and quickly solve missing or abducted person reports. It's a future where stoplights automatically adjust their timing to sync with the flow of traffic, reducing gridlock and optimizing starts and stops. It's a future where robots are more autonomous and performance efficiency is dramatically increased.


An Analog Neural Network Inspired by Fractal Block Coding

Pineda, Fernando J., Andreou, Andreas G.

Neural Information Processing Systems

We consider the problem of decoding block-coded data using a physical dynamical system. We sketch out a decompression algorithm for fractal block codes and then show how to implement a recurrent neural network using physically simple but highly nonlinear analog circuit models of neurons and synapses. The nonlinear system has many fixed points, but we have at our disposal a procedure for choosing the parameters in such a way that only one solution, the desired solution, is stable. As a partial proof of concept, we present experimental data from a small system: a 16-neuron analog CMOS chip fabricated in a 2 μm analog p-well process. This chip operates in the subthreshold regime and, for each choice of parameters, converges to a unique stable state. Each state exhibits a qualitatively fractal shape.
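For readers unfamiliar with fractal block codes, the decoder is essentially an iterated contraction: each range block is rebuilt from a scaled, offset, downsampled copy of a domain block, and iterating that map from any starting signal converges to its unique fixed point, which is the decoded signal. The 1-D Python sketch below is a hedged illustration of that decompression loop, not the paper's analog circuit; the code table, block sizes, and coefficients are invented for the example.

```python
import numpy as np

# Toy 1-D fractal block decoder (illustrative assumption, not the paper's chip):
# each length-4 "range" block is reconstructed as a scaled, offset, downsampled
# copy of a length-8 "domain" block taken from the current signal estimate.
# Because the scale factors are contractive (|a| < 1), iterating the map from
# any starting signal converges to a unique fixed point -- the same kind of
# convergence the analog network is designed to exhibit.

RANGE, DOMAIN = 4, 8

# A toy code: for each range block, (domain block start, scale a, offset b).
code = [(8, 0.5, 0.1), (0, 0.4, -0.2), (4, 0.3, 0.05), (6, 0.5, 0.0)]
N = RANGE * len(code)

def decode_step(x, code):
    """One application of the block-coded map to the current estimate x."""
    y = np.empty_like(x)
    for r, (d, a, b) in enumerate(code):
        domain = x[d:d + DOMAIN].reshape(RANGE, 2).mean(axis=1)  # downsample 2:1
        y[r * RANGE:(r + 1) * RANGE] = a * domain + b            # affine contraction
    return y

x = np.zeros(N)                      # arbitrary initial signal
for _ in range(50):                  # iterate toward the unique fixed point
    x = decode_step(x, code)
print(np.round(x, 3))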


Performance of a Stochastic Learning Microchip

Alspector, Joshua, Gupta, Bhusan, Allen, Robert B.

Neural Information Processing Systems

We have fabricated a test chip in 2-micron CMOS technology that embodies these ideas, and we report our evaluation of the microchip and our plans for improvements. Knowledge is encoded in the test chip by presenting digital patterns to it that are examples of a desired input-output Boolean mapping. This knowledge is learned and stored entirely on chip in a digitally controlled synapse-like element in the form of connection strengths between neuron-like elements. The only portion of this learning system that is off chip is the VLSI test equipment used to present the patterns. This learning system uses a modified Boltzmann machine algorithm [3] which, if simulated on a serial digital computer, takes enormous amounts of computer time. Our physical implementation is about 100,000 times faster. The test chip, if expanded to a board-level system of thousands of neurons, would be an appropriate architecture for solving artificial intelligence problems whose solutions are hard to specify using a conventional rule-based approach. Examples include speech and pattern recognition and encoding some types of expert knowledge.
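As a rough software analogue of the on-chip learning described here, the sketch below runs the standard two-phase Boltzmann machine procedure: stochastic binary units settle by Gibbs sampling, and each connection strength is nudged by the difference between clamped-phase and free-phase co-activation statistics. The network size, Boolean mapping, temperature, and learning rate are assumptions chosen for illustration and are not taken from the chip.

```python
import numpy as np

# Toy software analogue of Boltzmann machine learning of a Boolean mapping
# (an illustrative assumption, not the chip's exact modified algorithm).
rng = np.random.default_rng(0)

N_IN, N_OUT, N_HID = 2, 1, 2
BIAS = N_IN + N_OUT + N_HID            # index of an always-on bias unit
N = BIAS + 1
W = np.zeros((N, N))                   # symmetric connection strengths
T = 1.0                                # temperature of the stochastic units

# Desired input-output Boolean mapping (here OR), presented as digital patterns.
patterns = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def settle(clamp, steps=25):
    """Gibbs-sample the unclamped units; `clamp` maps unit index -> fixed value."""
    s = rng.integers(0, 2, N).astype(float)
    for i, v in clamp.items():
        s[i] = v
    for _ in range(steps):
        for i in range(N):
            if i not in clamp:
                p = 1.0 / (1.0 + np.exp(-(W[i] @ s) / T))
                s[i] = float(rng.random() < p)
    return s

def phase_stats(clamp_outputs):
    """Average co-activation <s_i s_j> over the patterns for one phase."""
    stats = np.zeros((N, N))
    for inp, out in patterns:
        clamp = {0: inp[0], 1: inp[1], BIAS: 1.0}
        if clamp_outputs:
            clamp[N_IN] = float(out)
        s = settle(clamp)
        stats += np.outer(s, s)
    return stats / len(patterns)

for _ in range(150):                   # two-phase Boltzmann learning rule
    dW = phase_stats(True) - phase_stats(False)
    np.fill_diagonal(dW, 0.0)
    W += 0.2 * dW

# Probe the trained network: clamp each input pattern and estimate P(output = 1).
for inp, out in patterns:
    clamp = {0: inp[0], 1: inp[1], BIAS: 1.0}
    p_on = np.mean([settle(clamp)[N_IN] for _ in range(20)])
    print(f"input {inp}: P(output=1) ~ {p_on:.2f}  (target {out})")
```

On the chip the two phases and the weight updates happen in parallel analog hardware, which is why the authors report a speedup of roughly five orders of magnitude over a serial simulation of the same rule.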


Programmable Synaptic Chip for Electronic Neural Networks

Moopenn, Alexander, Langenbacher, H., Thakoor, A. P., Khanna, S. K.

Neural Information Processing Systems

A binary synaptic matrix chip has been developed for electronic neural networks. The matrix chip contains a programmable 32×32 array of "long channel" NMOSFET binary connection elements implemented in a 3 μm bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a "cascadable" building block for a multi-chip synaptic network as large as 512×512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin-film resistors are deposited, in series with FET switches, on some CMOS test chips to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area.
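Because the neurons are off-chip, each synaptic chip can be modeled functionally as a binary connection matrix that sums input-line currents, and larger arrays are assembled by tiling chips. The sketch below is a hedged functional model under that assumption, simulating a 2×2 tiling of 32×32 chips with a simple off-chip threshold neuron; it is not the JPL hardware design.

```python
import numpy as np

# Rough functional model (an assumption for illustration, not the JPL circuit):
# each "chip" is a programmable 32x32 binary connection matrix that sums the
# currents from its input lines; neurons live off-chip and simply threshold
# the summed current. Larger synaptic arrays are built by tiling chips, here
# a 2x2 tiling giving a 64x64 network (the paper scales this idea to 512x512).
rng = np.random.default_rng(0)
CHIP = 32

def make_chip():
    """One synaptic chip: a programmable array of binary (on/off) connections."""
    return rng.integers(0, 2, size=(CHIP, CHIP)).astype(float)

def chip_currents(chip, inputs):
    """Output-line currents contributed by one chip for binary input lines."""
    return chip @ inputs

# Cascade four chips into a 64x64 synaptic matrix (block structure).
tiles = [[make_chip(), make_chip()],
         [make_chip(), make_chip()]]

x = rng.integers(0, 2, size=2 * CHIP).astype(float)      # 64 binary input lines

# Sum the per-chip currents block by block, exactly as wired chips would.
currents = np.concatenate([
    chip_currents(tiles[0][0], x[:CHIP]) + chip_currents(tiles[0][1], x[CHIP:]),
    chip_currents(tiles[1][0], x[:CHIP]) + chip_currents(tiles[1][1], x[CHIP:]),
])

# Off-chip neurons: threshold the summed current to produce the next state.
theta = currents.mean()                                   # illustrative threshold
y = (currents > theta).astype(float)

# The tiled chips behave like one large binary weight matrix.
W_big = np.block(tiles)
assert np.allclose(currents, W_big @ x)
print(y[:16])
```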